
    Instance Embedding Transfer to Unsupervised Video Object Segmentation

    We propose a method for unsupervised video object segmentation that transfers the knowledge encapsulated in image-based instance embedding networks. The instance embedding network produces an embedding vector for each pixel, enabling identification of all pixels belonging to the same object. Although trained on static images, the instance embeddings remain stable across consecutive video frames, which allows us to link objects together over time. We therefore adapt instance networks trained on static images to video object segmentation, combining the embeddings with objectness and optical flow features without model retraining or online fine-tuning. The proposed method outperforms state-of-the-art unsupervised segmentation methods on the DAVIS and FBMS datasets.
    Comment: To appear in CVPR 201
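    The core idea of temporal linking via stable per-pixel embeddings can be illustrated with a minimal sketch. This is not the authors' implementation; the embeddings below are synthetic stand-ins for the output of an image-trained instance embedding network, and the matching rule (nearest neighbor by cosine similarity) is one simple assumption for how stability across frames lets pixels be linked.

```python
import numpy as np

# Synthetic per-pixel embeddings for two consecutive frames, shaped
# (H, W, D). In the paper these would come from an instance embedding
# network trained on static images; here they are random stand-ins.
rng = np.random.default_rng(0)
H, W, D = 4, 4, 8
emb_t = rng.normal(size=(H, W, D))
# "Stable over consecutive frames" is modeled as a small perturbation.
emb_t1 = emb_t + 0.05 * rng.normal(size=(H, W, D))

def link_pixels(emb_a, emb_b):
    """For each pixel in frame a, find the most similar pixel in frame b
    by cosine similarity of their embedding vectors."""
    a = emb_a.reshape(-1, emb_a.shape[-1])
    b = emb_b.reshape(-1, emb_b.shape[-1])
    a = a / np.linalg.norm(a, axis=1, keepdims=True)
    b = b / np.linalg.norm(b, axis=1, keepdims=True)
    sim = a @ b.T                 # (H*W, H*W) cosine-similarity matrix
    return sim.argmax(axis=1)     # index of best match per pixel

matches = link_pixels(emb_t, emb_t1)
# Because the embeddings barely change between frames, nearly every
# pixel links back to itself.
print((matches == np.arange(H * W)).mean())
```

    In the actual method these links are combined with objectness and optical-flow features rather than used alone.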

    CNN Architectures for Large-Scale Audio Classification

    Convolutional Neural Networks (CNNs) have proven very effective for image classification and show promise for audio. We use various CNN architectures to classify the soundtracks of a dataset of 70M training videos (5.24 million hours) with 30,871 video-level labels. We examine fully connected Deep Neural Networks (DNNs), AlexNet [1], VGG [2], Inception [3], and ResNet [4]. We investigate varying the sizes of both the training set and the label vocabulary, finding that analogs of the CNNs used in image classification do well on our audio classification task, and that larger training and label sets help up to a point. A model using embeddings from these classifiers does much better than raw features on the Audio Set [5] Acoustic Event Detection (AED) classification task.
    Comment: Accepted for publication at ICASSP 2017. Changes: added definitions of mAP, AUC, and d-prime; updated mAP/AUC/d-prime numbers for Audio Set based on the latest Audio Set revision; changed wording to fit the 4-page limit with the new additions.
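    Of the three metrics mentioned in the comment, d-prime is the least familiar; under the standard equal-variance Gaussian assumption it is a deterministic function of ROC AUC, d' = sqrt(2) * Phi^{-1}(AUC). A short sketch (not code from the paper) of that conversion:

```python
from statistics import NormalDist

def d_prime(auc):
    """Convert ROC AUC to d-prime, assuming equal-variance Gaussian
    score distributions for positives and negatives:
    d' = sqrt(2) * Phi^{-1}(AUC)."""
    return (2 ** 0.5) * NormalDist().inv_cdf(auc)

# A chance-level classifier (AUC = 0.5) has d' = 0; higher AUC maps
# monotonically to higher d'.
print(d_prime(0.5))   # 0.0
print(d_prime(0.9))   # about 1.81
```

    Unlike AUC, d-prime is unbounded above, which makes differences between strong classifiers easier to see.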

    Long-term modification of cortical synapses improves sensory perception

    Synapses and receptive fields of the cerebral cortex are plastic. However, changes to specific inputs must be coordinated within neural networks to ensure that excitability and feature selectivity are appropriately configured for perception of the sensory environment. Long-lasting enhancements and decrements of excitatory synaptic strength in rat primary auditory cortex were induced by pairing acoustic stimuli with activation of the nucleus basalis neuromodulatory system. Here we report that these synaptic modifications were approximately balanced across individual receptive fields, conserving mean excitation while reducing overall response variability. Decreased response variability should increase detection and recognition of near-threshold or previously imperceptible stimuli, as we found in behaving animals. Thus, modification of cortical inputs leads to wide-scale synaptic changes, which are related to improved sensory perception and enhanced behavioral performance.

    Inhibitory Actions Unified by Network Integration.
